In this chapter, we identify fundamental geometric structures that underlie the problems of sampling, optimization, inference and adaptive decision-making. Based on this identification, we derive algorithms that exploit these geometric structures to solve these problems efficiently. We show that a wide range of geometric theories emerge naturally in these fields, ranging from measure-preserving processes, information divergences, Poisson geometry, and geometric integration. Specifically, we explain how (i) leveraging the symplectic geometry of Hamiltonian systems enables us to construct (accelerated) sampling and optimization methods, (ii) the theory of Hilbert subspaces and Stein operators provides a general methodology to obtain robust estimators, and (iii) preserving the information geometry of decision-making yields adaptive agents that perform active inference. Throughout, we emphasize the rich connections between these fields; e.g., inference draws on sampling and optimization, and adaptive decision-making assesses decisions by inferring their counterfactual consequences. Our exposition provides a conceptual overview of the underlying ideas, rather than a technical discussion, which can be found in the references herein.
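To make point (i) concrete, the sketch below shows the symplectic leapfrog integrator at the heart of Hamiltonian Monte Carlo, the prototypical sampling method built from Hamiltonian geometry. This is an illustrative sketch rather than code from the chapter; `log_p` and `grad_log_p` stand for a user-supplied log-density and its gradient.

```python
import numpy as np

def leapfrog(q, p, grad_log_p, step, n_steps):
    """Symplectic leapfrog integration of Hamiltonian dynamics."""
    p = p + 0.5 * step * grad_log_p(q)      # initial half-step in momentum
    for _ in range(n_steps - 1):
        q = q + step * p                    # full step in position
        p = p + step * grad_log_p(q)        # full step in momentum
    q = q + step * p
    p = p + 0.5 * step * grad_log_p(q)      # final half-step in momentum
    return q, p

def hmc_step(q, log_p, grad_log_p, step=0.1, n_steps=20, rng=np.random):
    """One Hamiltonian Monte Carlo transition with Metropolis correction."""
    p0 = rng.standard_normal(q.shape)
    q_new, p_new = leapfrog(q, p0, grad_log_p, step, n_steps)
    # Hamiltonian = potential energy (negative log density) + kinetic energy
    h0 = -log_p(q) + 0.5 * p0 @ p0
    h1 = -log_p(q_new) + 0.5 * p_new @ p_new
    return q_new if np.log(rng.uniform()) < h0 - h1 else q
```

For a standard Gaussian target, for instance, one would pass `log_p = lambda q: -0.5 * q @ q` and `grad_log_p = lambda q: -q`; because the integrator preserves the symplectic structure, acceptance rates remain high even over long trajectories.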
Recently, there has been great interest in connections between continuous-time dynamical systems and optimization algorithms, notably in the context of accelerated methods for smooth and unconstrained problems. In this paper we extend this perspective to nonsmooth and constrained problems by obtaining differential inclusions associated to novel accelerated variants of the alternating direction method of multipliers (ADMM). Through a Lyapunov analysis, we derive rates of convergence for these dynamical systems in different settings that illustrate an interesting tradeoff between decaying and constant damping strategies. We also obtain perturbed equations capturing fine-grained details of these methods, which have improved stability and preserve the leading order convergence rates.
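For orientation, the unaccelerated ADMM iteration for the lasso problem, minimize 0.5*||Ax - b||^2 + lam*||z||_1 subject to x = z, can be written in a few lines; the accelerated variants and differential inclusions studied in the paper are built on iterations of this shape. A minimal sketch, not the paper's code:

```python
import numpy as np

def admm_lasso(A, b, lam, rho=1.0, n_iter=200):
    """Basic (unaccelerated) ADMM for the lasso, in scaled form."""
    n = A.shape[1]
    x, z, u = np.zeros(n), np.zeros(n), np.zeros(n)
    AtA_rhoI = A.T @ A + rho * np.eye(n)    # factor reused by every x-update
    Atb = A.T @ b
    for _ in range(n_iter):
        x = np.linalg.solve(AtA_rhoI, Atb + rho * (z - u))             # x-update
        z = np.sign(x + u) * np.maximum(np.abs(x + u) - lam / rho, 0)  # soft threshold
        u = u + x - z                                                  # dual update
    return z
```

Roughly speaking, accelerated variants add momentum terms to these updates, and their continuous-time limits are the damped dynamical systems whose decaying-versus-constant damping tradeoff the paper analyzes.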
The field of robotics, and more specifically humanoid robotics, has several established competitions with research-oriented goals in mind. By challenging the robots in a handful of tasks, these competitions provide a way to gauge the state of the art in robotic design, as well as an indicator of how far we are from reaching human performance. The most notable competitions are RoboCup, which has the long-term goal of competing against a real human team in 2050, and the FIRA HuroCup league, in which humanoid robots have to perform tasks based on actual Olympic events. Having robots compete against humans under the same rules is a challenging goal, and we believe that it is in the sport of archery that humanoid robots have the most potential to achieve it in the near future. In this work, we take a first step in this direction. We present a humanoid robot that is capable of gripping, drawing and shooting a recurve bow at a target 10 meters away with considerable accuracy. Additionally, we show that it is also capable of shooting distances of over 50 meters.
State-of-the-art brain tumor segmentation is based on deep learning models applied to multi-modal MRIs. Currently, these models are trained on images after a preprocessing stage that involves registration, interpolation, brain extraction (BE, also known as skull-stripping) and manual correction by an expert. However, for clinical practice, this last step is tedious and time-consuming and, therefore, not always feasible, resulting in skull-stripping faults that can negatively impact the tumor segmentation quality. Still, the extent of this impact has never been measured for any of the many different BE methods available. In this work, we propose an automatic brain tumor segmentation pipeline and evaluate its performance with multiple BE methods. Our experiments show that the choice of a BE method can compromise up to 15.7% of the tumor segmentation performance. Moreover, we propose training and testing tumor segmentation models on non-skull-stripped images, effectively discarding the BE step from the pipeline. Our results show that this approach leads to a competitive performance at a fraction of the time. We conclude that, in contrast to the current paradigm, training tumor segmentation models on non-skull-stripped images can be the best option when high performance in clinical practice is desired.
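For context, tumor segmentation quality in such comparisons is typically measured with an overlap score such as the Dice coefficient; the sketch below shows how a per-BE-method evaluation might be organized. The `be_methods` and `segment` callables are placeholders, not the paper's pipeline.

```python
import numpy as np

def dice(pred, truth):
    """Dice overlap between binary segmentation masks (1.0 = perfect)."""
    inter = np.logical_and(pred, truth).sum()
    return 2.0 * inter / (pred.sum() + truth.sum())

def compare_be_methods(images, truths, be_methods, segment):
    """Mean Dice of a fixed segmentation model under each brain-extraction
    method; be_methods maps names to skull-stripping callables and segment
    is the tumor segmentation model (both hypothetical)."""
    return {name: float(np.mean([dice(segment(be(img)), gt)
                                 for img, gt in zip(images, truths)]))
            for name, be in be_methods.items()}
```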
Bi-encoders and cross-encoders are widely used in many state-of-the-art retrieval pipelines. In this work we study the generalization ability of these two types of architectures across a wide range of parameter counts, in both in-domain and out-of-domain scenarios. We find that the number of parameters and the early query-document interactions of cross-encoders play a significant role in the generalization ability of retrieval models. Our experiments show that increasing model size results in marginal gains on in-domain test sets, but much larger gains in new domains never seen during fine-tuning. Furthermore, we show that cross-encoders largely outperform bi-encoders of similar size in several tasks. On the BEIR benchmark, our largest cross-encoder surpasses a state-of-the-art bi-encoder by more than 4 average points. Finally, we show that using bi-encoders as first-stage retrievers provides no gains over a simpler retriever such as BM25 on out-of-domain tasks. The code is available at https://github.com/guilhermemr04/scaling-zero-shot-retrieval.git
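The architectural difference behind these results fits in a few lines; the sketch below is schematic, with `encode` and `joint_score` standing in for trained models (hypothetical names, not the released code).

```python
import numpy as np

def bi_encoder_score(query, doc, encode):
    """Bi-encoder: query and document are embedded independently, and
    relevance is a similarity between the two vectors. Documents can be
    pre-encoded, which makes this cheap enough for first-stage retrieval."""
    q, d = encode(query), encode(doc)
    return q @ d / (np.linalg.norm(q) * np.linalg.norm(d))

def cross_encoder_score(query, doc, joint_score):
    """Cross-encoder: query and document are processed jointly, so attention
    can model early query-document interactions before producing a score.
    More accurate, but every (query, doc) pair costs a full forward pass."""
    return joint_score(query + " [SEP] " + doc)
```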
Besides accuracy, recent studies on machine learning models have been addressing the question of how the obtained results can be interpreted. Indeed, while complex machine learning models are able to provide very good results in terms of accuracy, even in challenging applications, they are difficult to interpret. Aiming at providing some interpretability for such models, one of the most famous methods, called SHAP, borrows the Shapley value concept from game theory in order to locally explain the predicted outcome of an instance of interest. As the calculation of SHAP values requires computations over all possible coalitions of attributes, its computational cost can be very high. Therefore, a SHAP-based method called Kernel SHAP adopts an efficient strategy that approximates such values with less computational effort. In this paper, we also address local interpretability in machine learning based on Shapley values. Firstly, we provide a straightforward formulation of a SHAP-based method for local interpretability by using the Choquet integral, which leads to both Shapley values and Shapley interaction indices. Moreover, we adopt the concept of $k$-additive games from game theory, which helps to reduce the computational effort when estimating the SHAP values. The obtained results attest that our proposal requires fewer computations on coalitions of attributes to approximate the SHAP values.
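For reference, the exact Shapley value is a weighted sum over all coalitions of attributes, which is what makes approximations such as Kernel SHAP, and the $k$-additive formulation above, necessary. A minimal sketch of the exact computation, assuming a user-supplied coalition value function `value`:

```python
from itertools import combinations
from math import factorial

def shapley_values(value, n):
    """Exact Shapley values by enumerating all coalitions of n players.
    value(S) returns the payoff of coalition S (a frozenset of indices).
    The 2^n cost of this enumeration is the bottleneck that approximate
    methods avoid."""
    phi = [0.0] * n
    for i in range(n):
        others = [j for j in range(n) if j != i]
        for k in range(n):
            w = factorial(k) * factorial(n - k - 1) / factorial(n)
            for S in combinations(others, k):
                S = frozenset(S)
                phi[i] += w * (value(S | {i}) - value(S))
    return phi
```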
In machine learning, the use of algorithm-agnostic approaches is an emerging area for explaining the contribution of individual features to the predicted outcome. While the focus has been on explaining the predictions themselves, some work has been done on explaining the robustness of these models, i.e., how each feature contributes to achieving that robustness. In this paper, we propose using Shapley values to explain the contribution of each feature to the model's robustness, measured in terms of the receiver operating characteristic (ROC) curve and the area under the ROC curve (AUC). With the help of an illustrative example, we demonstrate the proposed idea of explaining the ROC curve and show how the uncertainty in these curves can be visualized. For imbalanced datasets, the use of precision-recall curves (PRC) is considered more appropriate; therefore, we also demonstrate how PRCs can be explained with the help of Shapley values.
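Concretely, this amounts to choosing the coalition payoff to be a performance metric rather than a single prediction, so that a standard Shapley decomposition attributes the AUC itself to individual features. A hedged sketch of such a value function, where `fit_score_subset` is a hypothetical helper that retrains (or masks) the model using only the selected feature columns:

```python
from sklearn.metrics import roc_auc_score

def auc_value(S, X, y, fit_score_subset):
    """Coalition payoff for robustness explanation: the AUC achieved when
    only the features in S are available (X is a NumPy feature matrix)."""
    if not S:
        return 0.5  # chance-level AUC for an uninformative model
    scores = fit_score_subset(X[:, sorted(S)], y)
    return roc_auc_score(y, scores)
```

Swapping AUC for average precision gives the precision-recall analogue discussed above.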
The universal approximation theorem asserts that a single hidden layer neural network can approximate continuous functions on compact sets with any desired accuracy. As an existence result, the universal approximation theorem supports the use of neural networks in a wide range of applications, including regression and classification tasks. The universal approximation theorem is not restricted to real-valued neural networks but also holds for complex-, quaternion-, tessarine-, and Clifford-valued neural networks. This paper extends the universal approximation theorem to a broad class of hypercomplex-valued neural networks. Precisely, we first introduce the concept of non-degenerate hypercomplex algebras. Complex numbers, quaternions, and tessarines are examples of non-degenerate hypercomplex algebras. We then state the universal approximation theorem for hypercomplex-valued neural networks defined on non-degenerate algebras.
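As an illustration of the objects covered by such theorems, here is a single-hidden-layer complex-valued network, the simplest hypercomplex case; the split real/imaginary activation is one common choice among several, and the sketch is illustrative rather than taken from the paper.

```python
import numpy as np

def complex_mlp(x, W1, b1, W2, b2):
    """Single-hidden-layer complex-valued network: the universal
    approximation statement concerns networks of exactly this shape."""
    z = W1 @ x + b1
    h = np.tanh(z.real) + 1j * np.tanh(z.imag)  # split-type activation
    return W2 @ h + b2

rng = np.random.default_rng(0)
n_in, n_hidden, n_out = 2, 16, 1
cgauss = lambda *s: rng.standard_normal(s) + 1j * rng.standard_normal(s)
W1, b1 = cgauss(n_hidden, n_in), cgauss(n_hidden)
W2, b2 = cgauss(n_out, n_hidden), cgauss(n_out)
print(complex_mlp(np.array([0.3 + 0.1j, -0.5 + 0.7j]), W1, b1, W2, b2))
```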
We introduce MR-Net, a general architecture for multiresolution neural networks, as well as a framework for imaging applications based on this architecture. Our coordinate-based networks are continuous in both space and scale, since they are composed of multiple stages that progressively add finer details. Besides that, they constitute a compact and efficient representation. We show examples of multiresolution image representation and applications to texture magnification, minification, and antialiasing.
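The general idea of stages that progressively add finer detail can be sketched as follows; this illustrates the concept only and is not the actual MR-Net architecture (the sinusoidal stages and the `n_active` level-of-detail control are assumptions of the sketch).

```python
import torch
import torch.nn as nn

class SineLayer(nn.Module):
    """Sinusoidal layer; omega sets the frequency band the stage covers."""
    def __init__(self, d_in, d_out, omega):
        super().__init__()
        self.linear = nn.Linear(d_in, d_out)
        self.omega = omega

    def forward(self, x):
        return torch.sin(self.omega * self.linear(x))

class MultiresNet(nn.Module):
    """Sum of stages at increasing frequencies: early stages capture the
    coarse signal and later stages add finer detail, so truncating the sum
    yields a coarser level of detail."""
    def __init__(self, n_stages=3, width=64, base_omega=5.0):
        super().__init__()
        self.stages = nn.ModuleList(
            nn.Sequential(SineLayer(2, width, base_omega * 2 ** s),
                          nn.Linear(width, 3))
            for s in range(n_stages))

    def forward(self, coords, n_active=None):
        out = 0.0
        for stage in self.stages[:n_active]:  # None = evaluate all stages
            out = out + stage(coords)
        return out
```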
Principal component analysis (PCA) is a ubiquitous dimensionality reduction technique in signal processing that searches for a projection matrix minimizing the squared error between the reduced dataset and the original one. Since classical PCA is not tailored to address fairness-related concerns, its application to real-world problems may lead to disparities in reconstruction errors across different groups (e.g., men and women, whites and blacks), with potentially harmful consequences such as introducing bias against sensitive groups. Although several fair versions of PCA have recently been proposed, there is still a fundamental gap in the search for algorithms that are simple enough to be deployed in real systems. To address this, we propose a novel PCA algorithm that tackles fairness issues by means of a simple strategy comprising a one-dimensional search that exploits the closed-form solution of PCA. As attested by numerical experiments, the proposal can significantly improve fairness with a very small loss in the overall reconstruction error, and without resorting to complex optimization schemes. Moreover, our findings are consistent across several real-world scenarios and with both unbalanced and balanced datasets.
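For reference, the closed-form PCA solution and the per-group reconstruction errors that a fair PCA tries to balance can be computed in a few lines; a minimal sketch of these ingredients, not of the proposed algorithm itself:

```python
import numpy as np

def pca_projection(X, k):
    """Closed-form PCA: the top-k right singular vectors of the centered data."""
    Xc = X - X.mean(axis=0)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Vt[:k].T  # d x k projection matrix

def group_reconstruction_errors(X, groups, V):
    """Mean squared reconstruction error per sensitive group; a fair PCA
    seeks to narrow the gap between these values."""
    Xc = X - X.mean(axis=0)
    residual = Xc - Xc @ V @ V.T
    errs = (residual ** 2).sum(axis=1)
    return {g: float(errs[groups == g].mean()) for g in np.unique(groups)}
```

As the abstract describes, the proposal's one-dimensional search exploits this closed form rather than solving a complex optimization problem from scratch.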